
    Network Essence: PageRank Completion and Centrality-Conforming Markov Chains

    Jiří Matoušek (1963-2015) had many breakthrough contributions in mathematics and algorithm design. His milestone results are not only profound but also elegant. By going beyond the original objects --- such as Euclidean spaces or linear programs --- Jirka found the essence of the challenging mathematical/algorithmic problems as well as beautiful solutions that were natural to him, but were surprising discoveries to the field. In this short exploration article, I will first share with readers my initial encounter with Jirka and discuss one of his fundamental geometric results from the early 1990s. In the age of social and information networks, I will then turn the discussion from geometric structures to network structures, attempting to take a humble step towards the holy grail of network science, that is, to understand the network essence that underlies the observed sparse-and-multifaceted network data. I will discuss a simple result which summarizes some basic algebraic properties of personalized PageRank matrices. Unlike the traditional transitive closure of binary relations, the personalized PageRank matrices take an "accumulated Markovian closure" of network data. Some of these algebraic properties are known in various contexts. But I hope featuring them together in a broader context will help to illustrate the desirable properties of this Markovian completion of networks, and motivate systematic developments of a network theory for understanding vast and ubiquitous multifaceted network data.
    Comment: In "A Journey Through Discrete Mathematics, A Tribute to Jiří Matoušek", Editors Martin Loebl, Jaroslav Nešetřil and Robin Thomas, Springer International Publishing, 2017.
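    To make the "accumulated Markovian closure" concrete, here is a minimal numerical sketch (not from the paper) of a personalized PageRank matrix: for a column-stochastic random-walk matrix W and restart probability alpha, the matrix alpha (I - (1 - alpha) W)^{-1} collects, column by column, the PageRank vectors personalized to each node. The graph, function name, and value of alpha below are illustrative.

```python
import numpy as np

def personalized_pagerank_matrix(adj, alpha=0.15):
    """Dense sketch of the personalized PageRank (PPR) matrix.

    adj: n-by-n adjacency matrix of a graph (assumed strongly connected).
    alpha: restart (teleportation) probability.
    Column j of the result is the PPR vector personalized to node j.
    """
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    W = adj / adj.sum(axis=0, keepdims=True)      # column-stochastic random-walk matrix
    # PPR_j solves p = alpha * e_j + (1 - alpha) * W p, i.e.
    # PPR = alpha * (I - (1 - alpha) * W)^{-1}
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * W)

# Example: a directed 3-cycle; every column of the PPR matrix sums to 1.
A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
P = personalized_pagerank_matrix(A)
print(P.sum(axis=0))   # ~[1. 1. 1.]
```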

    Games on the Sperner Triangle

    We create a new two-player game on the Sperner Triangle based on Sperner's lemma. Our game has simple rules and several desirable properties. First, the game is always certain to have a winner. Second, like many other interesting games such as Hex and Geography, we prove that deciding whether one can win our game is a PSPACE-complete problem. Third, there is an elegant balance in the game such that neither the first nor the second player always has a decisive advantage. We provide a web-based version of the game, playable at: http://cs-people.bu.edu/paithan/spernerGame/ . In addition, we propose other games, also based on fixed-point theorems.
    Comment: 18 pages, 19 figures. Uses paithan.sty.
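    For readers unfamiliar with Sperner's lemma, the following sketch (illustrative only, not the game itself) builds a labeling of a triangulated triangle that obeys Sperner's boundary rules and verifies that the number of trichromatic small triangles is odd. The grid encoding and function names are assumptions of this sketch.

```python
import random

def sperner_labeling(n, seed=0):
    """Random labeling of the side-n triangular grid obeying Sperner's boundary rules."""
    rng = random.Random(seed)
    lab = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            if (i, j) == (0, 0):
                lab[(i, j)] = 0
            elif (i, j) == (n, 0):
                lab[(i, j)] = 1
            elif (i, j) == (0, n):
                lab[(i, j)] = 2
            elif j == 0:                 # edge between corners labeled 0 and 1
                lab[(i, j)] = rng.choice([0, 1])
            elif i == 0:                 # edge between corners labeled 0 and 2
                lab[(i, j)] = rng.choice([0, 2])
            elif i + j == n:             # edge between corners labeled 1 and 2
                lab[(i, j)] = rng.choice([1, 2])
            else:                        # interior vertices are unconstrained
                lab[(i, j)] = rng.choice([0, 1, 2])
    return lab

def trichromatic_count(n, lab):
    """Count small triangles carrying all three labels; Sperner's lemma says this is odd."""
    count = 0
    for i in range(n):
        for j in range(n - i):
            up = {lab[(i, j)], lab[(i + 1, j)], lab[(i, j + 1)]}
            count += (up == {0, 1, 2})
            if i + j <= n - 2:
                down = {lab[(i + 1, j)], lab[(i, j + 1)], lab[(i + 1, j + 1)]}
                count += (down == {0, 1, 2})
    return count

n = 6
print(trichromatic_count(n, sperner_labeling(n)) % 2)   # always 1, by Sperner's lemma
```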

    Interplay between Social Influence and Network Centrality: A Comparative Study on Shapley Centrality and Single-Node-Influence Centrality

    We study network centrality based on dynamic influence propagation models in social networks. To illustrate our integrated mathematical-algorithmic approach for understanding the fundamental interplay between dynamic influence processes and static network structures, we focus on two basic centrality measures: (a) Single Node Influence (SNI) centrality, which measures each node's significance by its influence spread; and (b) Shapley Centrality, which uses the Shapley value of the influence spread function --- formulated based on a fundamental cooperative-game-theoretical concept --- to measure the significance of nodes. We present a comprehensive comparative study of these two centrality measures. Mathematically, we present axiomatic characterizations, which precisely capture the essence of these two centrality measures and their fundamental differences. Algorithmically, we provide scalable algorithms for approximating them for a large family of social-influence instances. Empirically, we demonstrate their similarity and differences in a number of real-world social networks, as well as the efficiency of our scalable algorithms. Our results shed light on their applicability: SNI centrality is suitable for assessing individual influence in isolation, while Shapley centrality assesses individuals' performance in group influence settings.
    Comment: The 10-page extended abstract version appears in WWW'2017.
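    A toy sketch of the two measures being compared, under an assumed reachability-based spread function rather than the paper's probabilistic influence models: SNI centrality is the spread of a single node, and Shapley centrality is estimated by averaging marginal contributions over random permutations (plain Monte Carlo, not the paper's scalable algorithm). All names and the example graph are illustrative.

```python
import random

def spread(graph, seed_set):
    """Toy influence-spread proxy: number of nodes reachable from the seed set."""
    reached, stack = set(seed_set), list(seed_set)
    while stack:
        for w in graph.get(stack.pop(), []):
            if w not in reached:
                reached.add(w)
                stack.append(w)
    return len(reached)

def shapley_centrality(graph, samples=2000, seed=0):
    """Monte Carlo estimate of each node's Shapley value of the spread function."""
    rng = random.Random(seed)
    nodes = list(graph)
    sh = {v: 0.0 for v in nodes}
    for _ in range(samples):
        rng.shuffle(nodes)
        prefix, prev = [], 0
        for v in nodes:
            prefix.append(v)
            cur = spread(graph, prefix)
            sh[v] += (cur - prev) / samples   # marginal contribution of v in this order
            prev = cur
    return sh

g = {0: [1, 2], 1: [2], 2: [], 3: [2]}
print({v: spread(g, [v]) for v in g})   # SNI centrality: spread of each node alone
print(shapley_centrality(g))            # Shapley centrality under the toy spread
```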

    Paths Beyond Local Search: A Nearly Tight Bound for Randomized Fixed-Point Computation

    In 1983, Aldous proved that randomization can speed up local search. For example, it reduces the query complexity of local search over [1:n]^d from Theta(n^{d-1}) to O(d^{1/2} n^{d/2}). It remains open whether randomization helps fixed-point computation. Inspired by this open problem and recent advances on equilibrium computation, we have been fascinated by the following question: Is a fixed point or an equilibrium fundamentally harder to find than a local optimum? In this paper, we give a nearly-tight bound of Omega(n^{d-1}) on the randomized query complexity for computing a fixed point of a discrete Brouwer function over [1:n]^d. Since the randomized query complexity of global optimization over [1:n]^d is Theta(n^{d}), the randomized query model over [1:n]^d strictly separates these three important search problems: Global optimization is harder than fixed-point computation, and fixed-point computation is harder than local search. Our result indeed demonstrates that randomization does not help much in fixed-point computation in the query model; the deterministic complexity of this problem is Theta(n^{d-1}).
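    The Aldous-style speedup mentioned at the start can be sketched as follows: sample roughly n^{d/2} random grid points, then run steepest descent from the best sample, counting every query. This is an illustrative sketch on a toy objective, not the paper's lower-bound construction; the function names and parameters are assumptions.

```python
import random

def randomized_local_search(f, n, d, seed=0):
    """Aldous-style randomized local search on the grid [1:n]^d (illustrative sketch):
    sample about n^(d/2) random grid points, then run steepest descent from the best
    sample. Every evaluation of f goes through q so that queries can be counted."""
    rng = random.Random(seed)
    queries = [0]

    def q(x):
        queries[0] += 1
        return f(x)

    # Phase 1: random sampling.
    samples = [tuple(rng.randint(1, n) for _ in range(d))
               for _ in range(int(n ** (d / 2)) + 1)]
    best = min(samples, key=q)
    # Phase 2: steepest descent among grid neighbors.
    while True:
        nbrs = [best[:i] + (best[i] + s,) + best[i + 1:]
                for i in range(d) for s in (-1, 1) if 1 <= best[i] + s <= n]
        nxt = min(nbrs, key=q)
        if q(nxt) >= q(best):
            return best, queries[0]
        best = nxt

f = lambda x: sum((xi - 7) ** 2 for xi in x)     # unique local (= global) minimum at (7, ..., 7)
print(randomized_local_search(f, n=16, d=2))     # ((7, 7), total number of queries)
```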

    A Complexity View of Markets with Social Influence

    In this paper, inspired by the work of Megiddo on the formation of preferences and strategic analysis, we consider an early market model studied in the field of economic theory, in which each trader's utility may be influenced by the bundles of goods obtained by her social neighbors. The goal of this paper is to understand and characterize the impact of social influence on the complexity of computing and approximating market equilibria. We present complexity-theoretic and algorithmic results for approximating market equilibria in this model, with a focus on two concrete influence models based on the traditional linear utility functions. Recall that an Arrow-Debreu market equilibrium in a conventional exchange market with linear utility functions can be computed in polynomial time by convex programming. Our complexity results show that even a bounded-degree, planar influence network can significantly increase the difficulty of equilibrium computation, even in markets with only a constant number of goods. Our algorithmic results suggest that finding an approximate equilibrium in markets with hierarchical influence networks might be easier than in markets with arbitrary neighborhood structures. By demonstrating a simple market with a constant number of goods and a bounded-degree, planar influence graph whose equilibrium is PPAD-hard to approximate, we also provide a counterexample to a common belief, which we refer to as the myth of a constant number of goods, that equilibria in markets with a constant number of goods are easy to compute or easy to approximate.
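    One way to picture a linear utility with social influence (an illustrative instantiation, not necessarily either of the two influence models analyzed in the paper): a trader's utility is her own linear utility plus a weighted sum, over her neighbors, of the neighbors' linear utilities for the bundles they receive. All matrices and values below are made up for illustration.

```python
import numpy as np

def influenced_utility(i, X, A, W):
    """One illustrative linear influence model (a hypothetical form for this sketch):
    trader i's utility is her own linear utility plus a weighted sum of her neighbors'
    linear utilities over the bundles they receive.

    X: n-by-g allocation matrix (X[k, j] = amount of good j held by trader k).
    A: n-by-g matrix of private linear utility coefficients.
    W: n-by-n nonnegative influence matrix (W[i, k] > 0 iff k is a neighbor of i).
    """
    own = A[i] @ X[i]
    social = sum(W[i, k] * (A[k] @ X[k]) for k in range(X.shape[0]) if k != i)
    return own + social

A = np.array([[1.0, 2.0], [2.0, 1.0]])        # two traders, two goods
W = np.array([[0.0, 0.5], [0.5, 0.0]])        # each trader half-weights the other
X = np.array([[1.0, 0.0], [0.0, 1.0]])        # a candidate allocation
print(influenced_utility(0, X, A, W))          # 1.0 + 0.5 * 1.0 = 1.5
```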

    Solving Sparse, Symmetric, Diagonally-Dominant Linear Systems in Time O(m^{1.31})

    We present a linear-system solver that, given an n-by-n symmetric positive semi-definite, diagonally dominant matrix A with m non-zero entries and an n-vector b, produces a vector \tilde{x} within relative distance \epsilon of the solution to A x = b in time O(m^{1.31} \log(n \kappa_{f}(A)/\epsilon)^{O(1)}), where \kappa_{f}(A) is the ratio of the largest to smallest non-zero eigenvalue of A. In particular, \log(\kappa_{f}(A)) = O(b \log n), where b is the logarithm of the ratio of the largest to smallest non-zero entry of A. If the graph of A has genus m^{2\theta} or does not have a K_{m^{\theta}} minor, then the exponent of m can be improved to the minimum of 1 + 5\theta and (9/8)(1 + \theta). The key contribution of our work is an extension of Vaidya's techniques for constructing and analyzing combinatorial preconditioners.
    Comment: fixed a typo on page
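    A Vaidya-flavored sketch of the idea behind combinatorial preconditioning: precondition a diagonally dominant system by the same system restricted to a maximum-weight spanning tree of its graph, then solve with preconditioned conjugate gradients. The paper's preconditioners augment the tree with extra edges and come with condition-number guarantees; the code below, with its toy ring graph, only illustrates the mechanics.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Graph Laplacian from (i, j, w) edges."""
    L = np.zeros((n, n))
    for i, j, w in weighted_edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def max_weight_spanning_tree(edges, n):
    """Prim-style maximum-weight spanning tree (assumes a connected graph)."""
    in_tree, tree = {0}, []
    while len(in_tree) < n:
        i, j, w = max((e for e in edges if (e[0] in in_tree) != (e[1] in in_tree)),
                      key=lambda e: e[2])
        tree.append((i, j, w))
        in_tree.update((i, j))
    return tree

def pcg(A, b, precond_solve, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients; precond_solve(r) applies B^{-1} to r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond_solve(r)
    p, rz = z.copy(), r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x, r = x + alpha * p, r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond_solve(r)
        rz, p = r @ z, z + ((r @ z) / rz) * p   # beta = (r_new . z_new) / (r_old . z_old)
    return x

# A strictly diagonally dominant SDD system: ring-graph Laplacian plus a small diagonal.
n = 50
edges = [(i, (i + 1) % n, 1.0 + (i % 3)) for i in range(n)]
A = laplacian(n, edges) + 0.1 * np.eye(n)
B = laplacian(n, max_weight_spanning_tree(edges, n)) + 0.1 * np.eye(n)   # tree preconditioner
b = np.random.default_rng(0).standard_normal(n)
x = pcg(A, b, precond_solve=lambda r: np.linalg.solve(B, r))
print(np.linalg.norm(A @ x - b))   # small residual
```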

    Smoothed analysis of algorithms

    Spielman and Teng introduced the smoothed analysis of algorithms to provide a framework in which one could explain the success in practice of algorithms and heuristics that could not be understood through the traditional worst-case and average-case analyses. In this talk, we survey some of the smoothed analyses that have been performed.
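    A small illustration of the smoothed-analysis viewpoint (in the spirit of the smoothed analysis of condition numbers, not any specific result surveyed in the talk): a worst-case instance can be arbitrarily bad, yet its small random perturbations are well behaved on average. The matrix, trial count, and sigma values are illustrative.

```python
import numpy as np

def smoothed_condition_number(M, sigma, trials=200, seed=0):
    """Average log10 condition number of M + sigma * G over Gaussian perturbations G,
    the kind of quantity a smoothed analysis bounds as a function of n and sigma."""
    rng = np.random.default_rng(seed)
    logs = []
    for _ in range(trials):
        G = rng.standard_normal(M.shape)
        logs.append(np.log10(np.linalg.cond(M + sigma * G)))
    return float(np.mean(logs))

# Worst case: a singular matrix has infinite condition number, yet a tiny random
# perturbation makes it well conditioned with high probability.
n = 30
M = np.ones((n, n))                       # rank one, hence singular
for sigma in (1e-1, 1e-3, 1e-6):
    print(sigma, smoothed_condition_number(M, sigma))
```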

    Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems

    We present a randomized algorithm that, on input a symmetric, weakly diagonally dominant n-by-n matrix A with m nonzero entries and an n-vector b, produces a vector y such that \|y - A^{+} b\|_{A} \leq \epsilon \|A^{+} b\|_{A} in expected time O(m \log^{c} n \log(1/\epsilon)) for some constant c. By applying this algorithm inside the inverse power method, we compute approximate Fiedler vectors in a similar amount of time. The algorithm applies subgraph preconditioners in a recursive fashion. These preconditioners improve upon the subgraph preconditioners first introduced by Vaidya (1990). For any symmetric, weakly diagonally-dominant matrix A with non-positive off-diagonal entries and k \geq 1, we construct in time O(m \log^{c} n) a preconditioner B of A with at most 2(n - 1) + O((m/k) \log^{39} n) nonzero off-diagonal entries such that the finite generalized condition number \kappa_{f}(A, B) is at most k, for some other constant c. In the special case when the nonzero structure of the matrix is planar, the corresponding linear system solver runs in expected time O(n \log^{2} n + n \log n \log\log n \log(1/\epsilon)). We hope that our introduction of algorithms of low asymptotic complexity will lead to the development of algorithms that are also fast in practice.
    Comment: This revised version contains a new section in which we prove that it suffices to carry out the computations with limited precision.
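    A sketch of the Fiedler-vector computation the abstract mentions: inverse power iteration on a connected graph Laplacian, restricted to the space orthogonal to the all-ones vector. A dense direct solve stands in here for the paper's nearly-linear-time solver; the example graph and iteration count are illustrative.

```python
import numpy as np

def fiedler_vector(L, iters=50, seed=0):
    """Approximate Fiedler vector of a connected graph Laplacian by inverse power iteration.
    Each step solves a Laplacian-like system; work is confined to the space orthogonal to
    the all-ones vector, where L is nonsingular."""
    n = L.shape[0]
    rng = np.random.default_rng(seed)
    ones = np.ones(n) / np.sqrt(n)
    v = rng.standard_normal(n)
    v -= (ones @ v) * ones
    # Solving (L + ones ones^T) x = v agrees with L^+ v on the complement of the
    # all-ones direction, and the shifted matrix is nonsingular.
    K = L + np.outer(ones, ones)
    for _ in range(iters):
        v = np.linalg.solve(K, v)
        v -= (ones @ v) * ones            # project out the all-ones direction
        v /= np.linalg.norm(v)
    return v

# Path graph on 6 nodes: the Fiedler vector is monotone along the path.
n = 6
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1
print(np.round(fiedler_vector(L), 3))
```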

    Smoothed Analysis of Interior-Point Algorithms: Termination

    We perform a smoothed analysis of the termination phase of an interior-point method. By combining this analysis with the smoothed analysis of Renegar's interior-point algorithm by Dunagan, Spielman and Teng, we show that the smoothed complexity of an interior-point algorithm for linear programming is O(m^{3} \log(m/\sigma)). In contrast, the best known bound on the worst-case complexity of linear programming is O(m^{3} L), where L could be as large as m. We include an introduction to smoothed analysis and a tutorial on proof techniques that have been useful in smoothed analyses.
    Comment: to be presented at the 2003 International Symposium on Mathematical Programming.

    Decision trees are PAC-learnable from most product distributions: a smoothed analysis

    We consider the problem of PAC-learning decision trees, i.e., learning a decision tree over the n-dimensional hypercube from independent random labeled examples. Despite significant effort, no polynomial-time algorithm is known for learning polynomial-sized decision trees (even trees of any super-constant size), even when examples are assumed to be drawn from the uniform distribution on {0,1}^n. We give an algorithm that learns arbitrary polynomial-sized decision trees for "most product distributions". In particular, consider a random product distribution where the bias of each bit is chosen independently and uniformly from, say, [.49,.51]. Then with high probability over the parameters of the product distribution and the random examples drawn from it, the algorithm will learn any tree. More generally, in the spirit of smoothed analysis, we consider an arbitrary product distribution whose parameters are specified only up to a [-c,c] accuracy (perturbation), for an arbitrarily small positive constant c.
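    A sketch of the smoothed input model described above (the learning algorithm itself is not sketched): each bit's bias is drawn uniformly from [.49, .51], and labeled examples are then drawn from that product distribution and labeled by a small decision tree. The tree encoding and function names are assumptions of this sketch.

```python
import numpy as np

def sample_examples(tree, n_bits, m, low=0.49, high=0.51, seed=0):
    """Draw m labeled examples from a product distribution whose per-bit biases are chosen
    independently and uniformly from [low, high], as in the smoothed setting above.
    `tree` is a dict-based decision tree: {'bit': i, 0: subtree, 1: subtree} or a leaf label."""
    rng = np.random.default_rng(seed)
    biases = rng.uniform(low, high, size=n_bits)     # the smoothed product distribution
    X = (rng.random((m, n_bits)) < biases).astype(int)

    def evaluate(node, x):
        while isinstance(node, dict):
            node = node[x[node['bit']]]
        return node

    y = np.array([evaluate(tree, x) for x in X])
    return X, y, biases

# A depth-2 tree over 8 bits computing x0 XOR x3, written out as a tree.
tree = {'bit': 0, 0: {'bit': 3, 0: 0, 1: 1}, 1: {'bit': 3, 0: 1, 1: 0}}
X, y, biases = sample_examples(tree, n_bits=8, m=5)
print(biases.round(3))
print(X, y)
```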